The imprecise nature of 3D printing limits this technology’s use beyond prototyping. In order to reliably use this technology for aerospace applications and aid certification of printed parts, the quality and repeatability of the parts must be improved. Much of the difficulty in obtaining high-quality printed parts lies in finding optimum printing parameters. Currently, this requires considerable trial and error, along with expertise in the 3D printing process and materials engineering.
We wish to create machine learning models that take in object parameters from the CAD design and optimize the printing parameters throughout the print to increase the printed part’s fidelity to the CAD design. More specifically, in this internship we aim to collect the data, train models, and use the predicted optimum local printing parameters to print a part with high fidelity and repeatability.
This is the workflow for reproducing the work by Kevin Hunt and Austin Ebel (and classification work by Sean Zylich and Evan Rose).
Overview
The part we have chosen to print is “3 sides of a cube”. This is a very simple 3D geometry.
Export as an .STL
This .STL file has very few vertices. We decided to use the vertices (X,Y,Z) as our data points of interest as obtaining them for any part is rather easy (just parse the STL file). However, we need more vertices than OpenSCAD created for this part. More vertices can be obtained by refining the model using NetFabb.
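Since the STL parsing step is only mentioned in passing, here is a minimal sketch of pulling the unique (X, Y, Z) vertices out of an ASCII STL file (this assumes the ASCII export described later in this section; the function name is our own):

```python
def stl_vertices(lines):
    """Collect the unique (X, Y, Z) vertices from the lines of an ASCII STL."""
    vertices = set()
    for line in lines:
        parts = line.split()
        # ASCII STL vertex lines look like: "vertex 0.0 1.0 2.0"
        if parts and parts[0] == "vertex":
            vertices.add(tuple(float(v) for v in parts[1:4]))
    return sorted(vertices)

# Usage with a file would be, e.g.: stl_vertices(open("refined_cube.stl"))
```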
NetFabb has an option to import a mesh, and, for our purposes, define a maximum edge length to refine the model. We used a student version of this software for this project.
A step-by-step process goes as follows:
Under the new menu select:
Set the first value, Max. Edge Length, to 0.1mm (or the desired maximum).
It’s important afterwards to click “Apply Repair” in the bottom left and remove the old part.
Now, go to:
as the default STL is a binary file, which isn’t compatible with the next part of our workflow.
Refining the model in NetFabb with a 2mm maximum edge length (not 0.1mm).
The reason we have chosen 0.1mm for the maximum edge length is to ensure that, later on, every gcode segment maps almost exactly to an STL vertex.
In addition to XYZ coordinates, we also expect the part geometry to affect optimum printing parameters. Thus, we wanted to obtain geometric information for each vertex (XYZ data point).
A software package known as Libigl was incredibly useful for this part, as it was designed specifically to obtain geometric data from meshes. They provide an in-depth tutorial and installation instructions here:
http://libigl.github.io/libigl/tutorial/
with their official GitHub page here:
https://github.com/libigl/libigl/
Within Libigl, we looked specifically at the vertex normals, Gaussian Curvature, and Discrete Mean Curvature (along with X, Y, Z coordinates). While Libigl is written in C++, it has bindings in Python which made working these values into the training data much easier later on.
Libigl has done a fantastic job of minimizing the amount of code you need to write. In addition, there are Python tutorials on their GitHub for each of the three types of geometric data we’ve used.
For example, code to calculate and view the vertex normals looks as follows:
import sys, os

# Add the igl library to the module's search path
sys.path.insert(0, '/Users/abebel/libigl/python/')
import pyigl as igl  # Libigl's python bindings

stl_path = '/Users/abebel/Desktop/refined_cube.stl'

# Libigl uses the Eigen matrix classes:
# More info found here: https://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html
OV = igl.eigen.MatrixXd()
OF = igl.eigen.MatrixXi()

def compute_vertex_normals():
    # Syntax for computing area-weighted per-vertex normals
    igl.per_vertex_normals(V, F, igl.PER_VERTEX_NORMALS_WEIGHTING_TYPE_AREA, N_vertices)
    return N_vertices

if __name__ == "__main__":
    V = igl.eigen.MatrixXd()
    F = igl.eigen.MatrixXi()
    SVI = igl.eigen.MatrixXi()
    SVJ = igl.eigen.MatrixXi()
    N_vertices = igl.eigen.MatrixXd()

    # Read in the mesh and remove duplicate vertices, since multiple triangles
    # can share the same vertex - this avoids redundant training data
    igl.read_triangle_mesh(stl_path, OV, OF)
    igl.remove_duplicate_vertices(OV, OF, 0.0, V, SVI, SVJ, F)

    N_vertices = compute_vertex_normals()
    # Vertex normals are stored in N_vertices
    print(N_vertices)
Code for all the geometric data is also below:
import sys, os

# Add the igl library to the module's search path
sys.path.insert(0, '/Users/abebel/libigl/python/')
import pyigl as igl

stl_path = '/Users/abebel/Desktop/refined_cube.stl'

OV = igl.eigen.MatrixXd()
OF = igl.eigen.MatrixXi()

# Compute vertex normals
def compute_vertex_normals():
    igl.per_vertex_normals(V, F, igl.PER_VERTEX_NORMALS_WEIGHTING_TYPE_AREA, N_vertices)
    return N_vertices

# Compute Gaussian curvature
def compute_gaussian():
    K = igl.eigen.MatrixXd()
    igl.gaussian_curvature(V, F, K)
    return K

# Compute discrete mean curvature - adapted from Libigl's tutorial
def compute_dmc():
    HN = igl.eigen.MatrixXd()
    L = igl.eigen.SparseMatrixd()
    M = igl.eigen.SparseMatrixd()
    Minv = igl.eigen.SparseMatrixd()
    igl.cotmatrix(V, F, L)
    igl.massmatrix(V, F, igl.MASSMATRIX_TYPE_VORONOI, M)
    igl.invert_diag(M, Minv)
    # Laplace-Beltrami of position
    HN = -Minv * (L * V)
    # Extract magnitude as mean curvature
    H = HN.rowwiseNorm()
    # Alternatively, compute curvature directions via quadric fitting
    PD1 = igl.eigen.MatrixXd()
    PD2 = igl.eigen.MatrixXd()
    PV1 = igl.eigen.MatrixXd()
    PV2 = igl.eigen.MatrixXd()
    igl.principal_curvature(V, F, PD1, PD2, PV1, PV2)
    # Mean curvature as the average of the principal curvatures
    H = 0.5 * (PV1 + PV2)
    return H

if __name__ == "__main__":
    V = igl.eigen.MatrixXd()
    F = igl.eigen.MatrixXi()
    SVI = igl.eigen.MatrixXi()
    SVJ = igl.eigen.MatrixXi()
    N_vertices = igl.eigen.MatrixXd()

    igl.read_triangle_mesh(stl_path, OV, OF)
    igl.remove_duplicate_vertices(OV, OF, 0.0, V, SVI, SVJ, F)

    N_vertices = compute_vertex_normals()
    K = compute_gaussian()  # K stores Gaussian curvature for each vertex
    H = compute_dmc()       # H stores discrete mean curvature for each vertex

    C = igl.eigen.MatrixXd()
    igl.jet(H, True, C)  # Convert into RGB values

    # Libigl's viewer
    viewer = igl.glfw.Viewer()
    viewer.data().set_mesh(V, F)
    viewer.data().set_colors(C)
    viewer.launch()
When set up to view the Discrete Mean Curvature values, the result should look something like this:
Discrete Mean Curvature Visualization (on 2mm edge length mesh)
In addition to the geometric data obtained from Libigl, we also expected the angle between each vertex normal and the Z axis, as well as the print direction, to be important to include. Angles from the Z axis might better capture overhangs, and the print direction (defined as a vector facing out of the front of the printer) might give us information about how orientation affects quality.
The angles found between the Z axis can be seen in the following picture:
We can use the vertex normals found through Libigl to find both θ_vert and φ_vert, and write functions that return these angles for every vertex in the STL file.

The print angle was found using the standard angle between two vectors equation:

θ_print = arccos((p · n) / (‖p‖‖n‖))

where p is the (0, 1, 0) vector facing out of the front of the printer and n is the vertex normal. In the code, compute_theta_vert() computes θ_vert for each vertex, and compute_phi_vert() computes φ_vert for each vertex.
The code for this looks as follows, and can also be visualized using Libigl’s viewer:
import sys
import math
import numpy as np

# Add the igl library to the module's search path
sys.path.insert(0, "/Users/abebel/libigl/python/")
import pyigl as igl
from iglhelpers import e2p

# Note: N_vertices, V and F come from the geometric-data script above

# Compute theta from vertical (shown in picture)
def compute_theta_vert():
    theta_vert = igl.eigen.MatrixXd(len(numpy_vertices), 1)
    # Iterate over the array, calculating theta for each vertex normal
    for i in range(len(numpy_vertices)):
        if numpy_vertices[i][1] == 0:
            theta_vert[i] = 0
        else:
            theta_vert[i] = math.atan2(numpy_vertices[i][1], numpy_vertices[i][0])
    return theta_vert

# Compute phi from vertical (shown in picture)
def compute_phi_vert():
    phi_vert = igl.eigen.MatrixXd(len(numpy_vertices), 1)
    for i in range(len(numpy_vertices)):
        phi_vert[i] = math.acos(numpy_vertices[i][2] / np.linalg.norm(numpy_vertices[i]))
    return phi_vert

# Compute the print angle according to the above equation
def compute_angle_print():
    print_direction = np.array([0, 1, 0])  # vector facing out of the front of the printer
    angle_print = igl.eigen.MatrixXd(len(numpy_vertices), 1)
    for i in range(len(numpy_vertices)):
        angle_print[i] = math.acos(np.dot(print_direction, numpy_vertices[i]) / np.linalg.norm(numpy_vertices[i]))
    return angle_print

if __name__ == "__main__":
    # Libigl's helper function to convert Eigen matrices into numpy arrays
    numpy_vertices = e2p(N_vertices)
    tv = compute_theta_vert()
    pv = compute_phi_vert()
    ap = compute_angle_print()

    C = igl.eigen.MatrixXd()
    igl.jet(tv, True, C)  # Convert into RGB values
    viewer = igl.glfw.Viewer()
    viewer.data().set_mesh(V, F)
    viewer.data().set_colors(C)
    viewer.launch()
Later in the manual, we’ll talk about how these values are stored in our dataframes and how the training data is actually formatted.
The parameters of interest are the Nozzle Temperature, Bed Temperature, Print Speed, Extrusion Multiplier and Fan Speed. Nozzle temperature ranges were chosen to be 225 C to 245 C; Rao (2015) recommended not going below 225 C to prevent clogging, and we have also noticed material burning above 245 C. The ideal bed temperature is suggested to be 110 C, so we are testing 100-120 C. Print speed was chosen to vary between 20 and 80 mm/sec based on print speeds suggested here. Rao (2015) suggested the extrusion multiplier does best around 100%, thus we’ve chosen to vary it between 90-110%. The Fan Speed multiplier was chosen to be either 10, 45, or 80% based on John Gardner’s suggestions.
The parameter ranges we are interested in testing for Natural ABS:
nozzle.temp <- seq(225,245,10)
bed.temp <- seq(100,120,10)
print.speed <- seq(20,80,20)
extrusion.multiplier <- seq(90,110,10)
fan.speed.percent <- c(10,45,80)
Combine these into possible combinations:
parameter.list <- list(Nozzle.Temp=nozzle.temp, Bed.Temp=bed.temp,
Print.Speed=print.speed, Extrusion.Multiplier =extrusion.multiplier,
Fan.Speed.Percent=fan.speed.percent)
all.options <- expand.grid(parameter.list)
There are 3 × 3 × 4 × 3 × 3 = 324 possible combinations.
Randomly order the rows
set.seed(42)
randomized.all.options <- all.options[sample(nrow(all.options),nrow(all.options)),]
# reset row numbers
rownames(randomized.all.options) <- NULL
# save these combinations to a text file
write.table("varied_parameters_print_order.txt", quote=F, x = randomized.all.options, col.names=FALSE)
| | Nozzle.Temp | Bed.Temp | Print.Speed | Extrusion.Multiplier | Fan.Speed.Percent |
|---|---|---|---|---|---|
| 1 | 245 | 120 | 20 | 110 | 80 |
| 2 | 245 | 110 | 40 | 110 | 80 |
| 3 | 245 | 100 | 60 | 110 | 10 |
| 4 | 245 | 110 | 40 | 100 | 80 |
| 5 | 235 | 120 | 60 | 110 | 45 |
| 6 | 225 | 110 | 60 | 100 | 45 |
We chose to randomly order the prints. If you look at a 3D representation of the problem surface, imagine the red indicates an error while printing and the dark blue indicates the highest fidelity. If we did not randomly select the printing parameters, our problem surface might be built in a straight line up the hill leading to the worst print conditions. By randomly sampling, we hope to approximate the surface of the problem prior to completing all 324 possible prints.
| | Pronterface.Temperature.Setting | Thermocouple.Reading | Difference |
|---|---|---|---|
| 1 | 260 | 257 | 3.0 |
| 2 | 255 | 252 | 3.0 |
| 3 | 250 | 247 | 3.0 |
| 4 | 245 | 243 | 2.0 |
| 5 | 240 | 238 | 2.0 |
| 6 | 235 | 234 | 1.0 |
| 7 | 230 | 229 | 1.0 |
| 8 | 225 | 223 | 2.0 |
| 9 | 220 | 219 | 1.0 |
| 10 | 215 | 214 | 1.0 |
| 11 | 210 | 209 | 1.0 |
| 12 | 205 | 204 | 1.0 |
| 13 | 200 | 200 | 0.0 |
| 14 | 195 | 195.1 | -0.1 |
| 15 | 190 | 190.2 | -0.2 |
| 16 | 185 | 185.3 | -0.3 |
| 17 | 180 | 180.5 | -0.5 |
| 18 | 175 | 175.3 | -0.3 |
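The calibration table above maps Pronterface temperature settings to the true (thermocouple) readings. A small sketch, with the values copied from the table, shows how this mapping could be inverted with linear interpolation to pick the setting that produces a desired true temperature (the helper function is our own, not part of the original workflow):

```python
import numpy as np

# (Pronterface setting, thermocouple reading) pairs from the calibration table,
# sorted in ascending order so np.interp can be used
settings = np.array([175, 180, 185, 190, 195, 200, 205, 210, 215,
                     220, 225, 230, 235, 240, 245, 250, 255, 260])
readings = np.array([175.3, 180.5, 185.3, 190.2, 195.1, 200.0, 204.0, 209.0, 214.0,
                     219.0, 223.0, 229.0, 234.0, 238.0, 243.0, 247.0, 252.0, 257.0])

def setting_for_temperature(target_c):
    """Interpolate the Pronterface setting that produces the target true temperature."""
    # readings are monotonically increasing, so np.interp can invert the map
    return float(np.interp(target_c, readings, settings))
```

For example, to actually reach 243 C at the nozzle, the table suggests setting Pronterface to 245 C.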
As seen above, there are 324 possible combinations, requiring 324 gcode files. A Python script was written to parse a base gcode into the 324 required files. Here is the code:
with open("C:\\Users\\kahunt1\\Desktop\\varied_parameters_print_order.txt") as print_order_file:
    for line in print_order_file:
        parameter_list = line.split()
        print_number = parameter_list[0]
        nozzle_temperature = parameter_list[1]
        bed_temperature = parameter_list[2]
        print_speed = parameter_list[3]
        extrusion_multiplier = parameter_list[4]
        fan_speed_percent = parameter_list[5]

        with open("C:\\Users\\kahunt1\\Documents\\OpenSCAD\\gcodes\\3 sided cube\\new_base_less_cooling_bed_temp_wait_time.gcode") as base_gcode:
            # load in the base gcode
            gcode_data = base_gcode.readlines()

        # Edit the gcode based on the new parameters
        # Set nozzle temp
        gcode_data[62] = "M109 R" + nozzle_temperature + " ; wait for extruder to reach printing temp\n"
        gcode_data[570] = "M104 S" + nozzle_temperature + "\n"
        # Set bed temp
        gcode_data[15] = "M140 S" + bed_temperature + " ; start bed heating up\n"
        gcode_data[63] = "M190 S" + bed_temperature + " ; wait for bed to reach printing temp\n"
        gcode_data[576] = "M140 S" + bed_temperature + "\n"

        # Set print speed and extrusion multiplier
        make_changes = False
        line_counter = 0
        previous_extrusion_multiplier = float(gcode_data[67][-4:-1])
        for code_line in gcode_data:
            # make sure we are outside of the skirt
            if code_line == ";TYPE:WALL-INNER\n":
                make_changes = True
            # make sure we are not at the end
            if "M104 S0" in code_line:
                make_changes = False
            # only change the line if it's not the skirt or after the print
            if make_changes:
                # if the line is an extrusion and movement command
                if all(x in code_line for x in ['G1', 'X', 'Y', 'E']):
                    split_line = code_line.split()
                    # loop through the elements of the line
                    for i in range(len(split_line)):
                        # if the element sets the speed
                        if "F" in split_line[i]:
                            # change the speed (mm/sec -> mm/min)
                            split_line[i] = 'F' + str(int(print_speed) * 60)
                        # if the element sets the extrusion amount
                        elif "E" in split_line[i]:
                            # grab the original extrusion amount
                            extruded_amount = split_line[i][1:]
                            # new amount = original * new flow / previous flow,
                            # rounded to 5 decimals
                            new_extrude_amount = round(float(extruded_amount) * float(extrusion_multiplier) / previous_extrusion_multiplier, 5)
                            split_line[i] = 'E' + str(new_extrude_amount)
                    # rewrite the line
                    gcode_data[line_counter] = " ".join(split_line) + '\n'
                elif " E" in code_line:
                    # extrusion-only commands
                    split_up = code_line.split()
                    for i in range(len(split_up)):
                        if "E" in split_up[i]:
                            extrusion = float(split_up[i][1:])
                            new_extrude_amount = round(extrusion * float(extrusion_multiplier) / previous_extrusion_multiplier, 5)
                            split_up[i] = "E" + str(new_extrude_amount)
                    gcode_data[line_counter] = " ".join(split_up) + '\n'
            line_counter += 1

        # Set extrusion multiplier
        gcode_data[67] = "M221 S100\n"
        # Set fan speed percent (M106 expects a value from 0 to 255)
        gcode_data[70] = "M106 S" + str(int(255 * int(fan_speed_percent) / 100)) + '\n'

        # Add the parameter settings to the file as comments
        print_parameters = [None] * 8
        print_parameters[0] = "; GCode auto generated from a base CURA file\n"
        print_parameters[1] = "; Kevin Hunt\n"
        print_parameters[2] = "; Print Number " + print_number + ' of 324\n'
        print_parameters[3] = "; Nozzle Temperature " + nozzle_temperature + ' C\n'
        print_parameters[4] = "; Bed Temperature " + bed_temperature + ' C\n'
        print_parameters[5] = "; Print Speed " + print_speed + ' mm/sec\n'
        print_parameters[6] = "; Extrusion Multiplier " + extrusion_multiplier + ' %\n'
        print_parameters[7] = "; Fan Speed " + fan_speed_percent + " %\n"
        gcode_data = print_parameters + gcode_data

        # write the new gcode to a file
        new_filename = "parts_to_print\\3_sides_of_a_cube_print_" + print_number + ".gcode"
        with open(new_filename, 'w') as new_gcode:
            new_gcode.writelines(gcode_data)
At the time of printing, the ambient room temperature and humidity were recorded. Prints are sent to the Lulzbot Mini printer using Pronterface.
Our end goal is to optimize parameters along the toolpath, but we gather geometric data from the STL file, so we needed an efficient way to convert STL vertex information into gcode segment information. This involved several steps, described below.
Our first challenge can be perfectly summed up in the picture below.
It is rather hard to see, but the STL vertices on the top and right side extend beyond the toolpath. This is an issue because gcode segments that lie on the corner might not receive the correct ‘corner-like’ information from Libigl.
The toolpath is adjusted so that, accounting for how the material is deposited, it does not exceed our 50mm x 50mm x 50mm design. So the actual dimensions specified by the toolpath are 49.5mm x 49.5mm x 49.4mm. In addition, the walls of our original STL file were 1mm wide; however, Cura creates a toolpath along the walls of only 0.5mm.
We expect this will be less of an issue for less sharp and thin models. While our ‘3 sides of a cube’ is simple and easy to print, it is somewhat of a double-edged sword: perfectly capturing the geometries requires exactly aligning the STL file and gcode segments.
We remedied this issue by simply adjusting the CAD model to reflect the gcode path. Initially, we scaled all of the X, Y, Z coordinates to meet the gcode toolpath; however, this still left the walls ~0.5mm apart. We realize this isn’t the most generalizable option, but again, we feel this issue is exacerbated by our model. In the future, it might be worth looking into better ways to match the toolpath with the STL file.
Secondly, our STL file needed to have the same coordinate system as the Gcode file. This way, we could directly associate geometric information from the STL file with Gcode segments. This was done by finding the minimum X, Y, Z coordinates of both the STL and Gcode files and finding the difference between each of them. Then, we added that difference to the STL file to obtain the same coordinate system.
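The alignment step above amounts to shifting the STL vertices by the per-axis difference in minima; a minimal numpy sketch (the arrays here are illustrative, not our actual files):

```python
import numpy as np

def align_to_gcode(stl_vertices, gcode_points):
    """Shift STL vertices so their per-axis minima match the gcode minima."""
    # difference between the minimum gcode and minimum STL coordinates, per axis
    offset = gcode_points.min(axis=0) - stl_vertices.min(axis=0)
    return stl_vertices + offset

# Toy stand-ins for the two point sets
stl = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
gcode = np.array([[50.0, 50.0, 0.2], [51.0, 52.0, 3.2]])
aligned = align_to_gcode(stl, gcode)
# the aligned STL minima now coincide with the gcode minima
```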
As mentioned above, our STL model needed to be refined so that each vertex’s geometric data would fall extremely close to a gcode segment. Once refined to a max edge length of 0.1mm, there were approximately 3,000,000 vertices in our STL file. Obtaining geometric data for all of the vertices took around 15 minutes.
In addition to refining the STL model, modifications must be made to the Gcode to ensure there are enough points to actually train the model on. There’s no point in having ~3,000,000 vertices of geometric data if only 500 get mapped to Gcode segments. In our ‘3 sides of a cube’, some Gcode segments spanned the entire 50mm wall, looking like this:
We decided it was best to break these Gcode segments into roughly 2mm pieces, because that could accurately map geometric data while not going beyond the accuracy that Sean and Evan’s neural network could classify. These changes were then written back into the Gcode like so:
Where at each segment, we have the ability to specify changes in parameters (ultimately predicted and smoothed from our own Machine Learning algorithm).
The code for this looks as follows; however, it still needs to be updated to take into account the predictions from our model:
import math

nozzle_temperature = "245"
bed_temperature = "100"
print_speed = "60"
extrusion_multiplier = "110"
fan_speed_percent = "10"
segment_size = 2  # mm
offset = 0  # lines

with open("C:\\Users\\kahunt1\\Documents\\OpenSCAD\\gcodes\\3 sided cube\\print_3_relative_extrusion.gcode") as base_gcode:
    # load in the base gcode
    gcode_data = base_gcode.readlines()

gcode_to_write = []

# Set the overall bed temperature
gcode_data[15 + offset] = "M140 S" + bed_temperature + " ; start bed heating up\n"
gcode_data[63 + offset] = "M190 S" + bed_temperature + " ; wait for bed to reach printing temp\n"
gcode_data[576 + offset] = "M140 S" + bed_temperature + "\n"
# Set first extrusion multiplier
gcode_data[67 + offset] = "M221 S100\n"
# Set first fan speed percent
gcode_data[71 + offset] = "M106 S" + str(round(255 * int(fan_speed_percent) / 100, 1)) + '\n'

# Set print speed and extrusion multiplier
make_changes = False
line_counter = 0
previous_extrusion_multiplier = float(gcode_data[67 + offset][-4:-1])
X_prev = 0
Y_prev = 0
current_X = 0
current_Y = 0
for code_line in gcode_data:
    # wrap in a list so unchanged lines extend gcode_to_write correctly below
    line_to_write = [code_line]
    # make sure we are outside of the skirt
    # TO DO: change this to instead check type not skirt
    if code_line == ";TYPE:WALL-INNER\n":
        make_changes = True
    # make sure we are not at the end
    if "M104 S0" in code_line:
        make_changes = False
    # store the current X and Y
    if "X" in code_line:
        current_X = float([x[1:] for x in code_line.split() if "X" in x][0])
    if " Y" in code_line:
        current_Y = float([y[1:] for y in code_line.split() if "Y" in y][0])
    # TO DO: prevent infill split up
    # only change the line if it's not the skirt or after the print
    if make_changes:
        # if the line is an extrusion and movement command
        if all(x in code_line for x in ['G1', 'X', 'Y', 'E']):
            # grab the amount of extrudate
            extrusion = float([e[1:] for e in code_line.split() if "E" in e][0])
            # start segmenting the line up: a^2 + b^2 = c^2
            extrusion_distance = math.sqrt((current_X - X_prev)**2 + (current_Y - Y_prev)**2)
            # based on the segment size, how many whole segments are needed
            number_of_new_segments = max(1, round(extrusion_distance / segment_size))
            # how far to step in the X and Y direction for each segment
            X_step_distance = (current_X - X_prev) / number_of_new_segments
            Y_step_distance = (current_Y - Y_prev) / number_of_new_segments
            # store the new coordinates for the segments
            new_X_coords = []
            new_Y_coords = []
            for i in range(number_of_new_segments - 1):
                new_X_coords.append(X_prev + X_step_distance * (i + 1))
                new_Y_coords.append(Y_prev + Y_step_distance * (i + 1))
            new_X_coords.append(current_X)
            new_Y_coords.append(current_Y)
            # split the extrusion amount evenly between the segments
            new_extrusion_amount = round(extrusion / number_of_new_segments, 5)
            # write these new instructions out
            line_to_write = []
            for i in range(number_of_new_segments):
                # Set fan speed percent
                fan_speed_set = "M106 S" + str(int(255 * float(fan_speed_percent) / 100)) + ' ; Set Fan Speed Percent\n'
                # Set hot end temp
                wait_hot_end = "M104 S" + nozzle_temperature + " ; Set Nozzle Temp and continue without waiting\n"
                gcode_command = "G1 F" + str(int(print_speed) * 60) + " X" + str(new_X_coords[i]) \
                    + " Y" + str(new_Y_coords[i]) + " E" + str(new_extrusion_amount) + "\n"
                line_to_write.extend([fan_speed_set, wait_hot_end, gcode_command])
    # store the previous X and Y
    X_prev = current_X
    Y_prev = current_Y
    gcode_to_write.extend(line_to_write)

# write the new gcode to a file
new_filename = "test_segmenting_no_temp_waiting_randomize_fan_speed.gcode"
with open(new_filename, 'w') as new_gcode:
    new_gcode.writelines(gcode_to_write)
The next part of making the training data involved associating geometric data about STL points with Gcode segments. While this problem was slightly remedied by giving each file the same coordinate system, there’s absolutely no guarantee that every STL point will directly correspond to a Gcode point.
This is the case because we’re segmenting the STL file only every 0.1mm, so there will always be that slight misalignment. In fact, only a few STL points directly corresponded to Gcode points, so we needed a way to find the closest vertex to each Gcode segment.
This was done through what’s known as a KD Tree. Information about how a KD Tree works can best be found here:
http://pointclouds.org/documentation/tutorials/kdtree_search.php
Scikit Learn - a fantastic package for all things Machine Learning in Python - uses a KD Tree for their K-Nearest Neighbors algorithm, thus we were able to take the functionality they provide and use it for our purposes. Their KD Tree implementation/examples can be found here:
http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html
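Before the full script, a toy example of the scikit-learn KDTree query we rely on (the points here are illustrative stand-ins for STL vertices and a gcode segment):

```python
import numpy as np
from sklearn.neighbors import KDTree

# Toy "STL vertices"
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
tree = KDTree(vertices)

# Nearest vertex to a toy "gcode segment" point
dist, ind = tree.query([[0.9, 0.1, 0.0]], k=1)
nearest = vertices[ind[0][0]]  # -> the vertex at (1, 0, 0)
```

`tree.query` returns both the distance to and the index of the nearest vertex, which is exactly what the script below records for each gcode segment.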
The code below uses the KD Tree to associate STL vertices with Gcode segments. The remove_duplicates() function removes the duplicate vertices in the STL file (a result of multiple triangles containing the same vertex). The reorient() function adds the offset described in the ‘Correctly Align the Gcode and STL Files’ section above. Finally, the KD Tree is created in the get_KDTree() function.
Code for all of this looks as follows:
# Given an STL file and spliced Gcode segments, this program matches each of
# the given GCode segments to the closest vertex in the STL file.
import sys, os
sys.path.insert(0, '/Users/abebel/libigl/python/')
import pandas as pd
from sklearn.neighbors import KDTree
import _pickle as cPickle
import pyigl as igl
from iglhelpers import e2p
import time

IMPORT_TREE = True
stl_path = '/Users/abebel/Desktop/refined_3-sided cube_scaled to match gcode.stl'
gcode_path = '/Users/abebel/Desktop/KD_Tree/segmented_gcode_points.txt'

def remove_duplicates():
    # Use libigl's function to read in the STL file
    OV = igl.eigen.MatrixXd()
    OF = igl.eigen.MatrixXi()
    igl.read_triangle_mesh(stl_path, OV, OF)
    V = igl.eigen.MatrixXd()
    F = igl.eigen.MatrixXi()
    SVI = igl.eigen.MatrixXi()
    SVJ = igl.eigen.MatrixXi()
    igl.remove_duplicate_vertices(OV, OF, 0.0, V, SVI, SVJ, F)
    print("\nFinished removing duplicates.")
    print("The mesh now has " + str(V.rows()) + " vertices.")
    return V

def reorient(V):
    vertices = e2p(V)  # Libigl helper function to create a numpy array from a MatrixXd
    df_vertices = pd.DataFrame(data=vertices, columns=['X', 'Y', 'Z'])
    df_vertices['X'] = df_vertices['X'].astype(float)
    df_vertices['Y'] = df_vertices['Y'].astype(float)
    df_vertices['Z'] = df_vertices['Z'].astype(float)
    min_stl_x = df_vertices['X'].min()
    min_stl_y = df_vertices['Y'].min()
    min_stl_z = df_vertices['Z'].min()

    df_gcode_arr = pd.read_csv(gcode_path, names=['X', 'Y', 'Z'])
    min_gcode_x = df_gcode_arr['X'].min()
    min_gcode_y = df_gcode_arr['Y'].min()
    min_gcode_z = df_gcode_arr['Z'].min()

    # Calculate the distance between the min gcode values and min STL values
    dist_x = min_gcode_x - min_stl_x
    dist_y = min_gcode_y - min_stl_y
    dist_z = min_gcode_z - min_stl_z

    # Add that distance as an offset to each STL vertex so the STL file
    # and G-code are in the same coordinate system
    df_vertices['X'] += dist_x
    df_vertices['Y'] += dist_y
    df_vertices['Z'] += dist_z
    print("Finished reorienting.")
    return df_vertices, df_gcode_arr

def get_KDTree(df_vertices, df_gcode_arr):
    if IMPORT_TREE:
        tree = pd.read_pickle('/Users/abebel/Desktop/tree.pickle')
    else:
        tree = KDTree(df_vertices)
        print("Saving tree as pickle file...", end='')
        with open("/Users/abebel/Desktop/tree.pickle", "wb") as output_file:
            cPickle.dump(tree, output_file)
        print(" done.")

    df_gcode = pd.DataFrame(columns=['index', 'gx', 'gy', 'gz', 'vx', 'vy', 'vz', 'distance'])
    # Query the KD Tree to find the nearest vertex for every gcode segment
    total_dist = 0
    for i in range(len(df_gcode_arr.index)):
        dist, ind = tree.query([[df_gcode_arr['X'][i].astype(float),
                                 df_gcode_arr['Y'][i].astype(float),
                                 df_gcode_arr['Z'][i].astype(float)]], k=1)
        df_gcode.loc[i] = [i + 1,
                           df_gcode_arr['X'][i],
                           df_gcode_arr['Y'][i],
                           df_gcode_arr['Z'][i],
                           df_vertices.loc[ind[0][0], 'X'],
                           df_vertices.loc[ind[0][0], 'Y'],
                           df_vertices.loc[ind[0][0], 'Z'],
                           dist[0][0]]
        total_dist += dist[0][0]
    average_dist = round(total_dist / len(df_gcode_arr.index), 5)
    print("Average distance from vertex: ", average_dist)
    return df_gcode

if __name__ == '__main__':
    t0 = time.perf_counter()
    V = remove_duplicates()
    df_vertices, df_gcode_arr = reorient(V)
    df_gcode = get_KDTree(df_vertices, df_gcode_arr)
    df_gcode.to_pickle("/Users/abebel/Desktop/df_gcode.pickle")
    df_gcode.to_csv("/Users/abebel/Desktop/df_gcode.csv")
    t1 = time.perf_counter()
    print("Run time: " + str(round(t1 - t0, 4)) + "\n")
Running the code above creates a dataframe (discussed in the next section) whose first 20 segments look like:
| Segment | Gcode X | Gcode Y | Gcode Z | Vertex X | Vertex Y | Vertex Z | Distance |
|---|---|---|---|---|---|---|---|
| 1 | 99.229 | 101.25 | 0.424 | 99.236 | 101.267 | 0.424 | 0.01808 |
| 2 | 97.208 | 101.25 | 0.424 | 97.206 | 101.267 | 0.424 | 0.01676 |
| 3 | 95.188 | 101.25 | 0.424 | 95.176 | 101.267 | 0.424 | 0.02032 |
| 4 | 93.167 | 101.25 | 0.424 | 93.146 | 101.267 | 0.424 | 0.02689 |
| 5 | 91.146 | 101.25 | 0.424 | 91.115 | 101.267 | 0.424 | 0.03481 |
| 6 | 89.125 | 101.25 | 0.424 | 89.133 | 101.218 | 0.424 | 0.03281 |
| 7 | 87.104 | 101.25 | 0.424 | 87.103 | 101.218 | 0.424 | 0.03176 |
| 8 | 85.083 | 101.25 | 0.424 | 85.073 | 101.218 | 0.424 | 0.03345 |
| 9 | 83.062 | 101.25 | 0.424 | 83.091 | 101.267 | 0.424 | 0.03283 |
| 10 | 81.042 | 101.25 | 0.424 | 81.061 | 101.267 | 0.424 | 0.02514 |
| 11 | 79.021 | 101.25 | 0.424 | 79.030 | 101.267 | 0.424 | 0.01910 |
| 12 | 77.000 | 101.25 | 0.424 | 77.000 | 101.267 | 0.424 | 0.01660 |
| 13 | 74.979 | 101.25 | 0.424 | 74.970 | 101.267 | 0.424 | 0.01910 |
| 14 | 72.958 | 101.25 | 0.424 | 72.939 | 101.267 | 0.424 | 0.02514 |
| 15 | 70.938 | 101.25 | 0.424 | 70.909 | 101.267 | 0.424 | 0.03283 |
| 16 | 68.917 | 101.25 | 0.424 | 68.927 | 101.218 | 0.424 | 0.03345 |
| 17 | 66.896 | 101.25 | 0.424 | 66.897 | 101.218 | 0.424 | 0.03176 |
| 18 | 64.875 | 101.25 | 0.424 | 64.867 | 101.218 | 0.424 | 0.03281 |
| 19 | 62.854 | 101.25 | 0.424 | 62.885 | 101.267 | 0.424 | 0.03481 |
| 20 | 60.833 | 101.25 | 0.424 | 60.854 | 101.267 | 0.424 | 0.02689 |
This dataframe is ~27,000 segments long. Including all segments, the average distance to the matched vertex is 0.04mm, which lines up with the fact that we refined the STL file so that no edge is longer than 0.1mm.
In summary, this means that for each Gcode segment, we’re using geometric/part information from vertices only 0.04mm away on average.
Quick aside: originally, we were segmenting the STL file every 2mm; however, we realized we were often associating a Gcode segment with vertex information from multiple layers above or below it. This is obviously not ideal, as we really wanted each Gcode segment to be mapped to a vertex on its own layer.
This is where we decided to further refine our model to a 0.1mm max edge length, resulting in the ~3,000,000 vertices (up from ~14,000). Dealing with this much data meant rethinking how we were storing it: moving away from arrays to a more database-like approach with the pandas DataFrame. Information about this can be found here:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
Essentially, this gave us a massive performance boost in creating our training data, as we were now using a data structure made for incredibly large amounts of data like our own. You can see the DataFrame structure in action in the code above.
Now we have our data set. This dataset contains the 6,241 XYZ points on the outer surface of the 3 sides of a cube (chosen from the NetFabb refinement) and their associated geometric properties (from Libigl), multiplied by the 324 prints. This results in 6,241 × 324 = 2,022,084 data points. Each of these needs to be labelled with its error type: none, blob, warping, or delamination.
Sean and Evan are working on an image classification algorithm to automatically determine the error type from pictures of the walls.
Using the above dataset, we wish to train a machine learning model to predict the optimum printing parameters for each point on the part, given the printer XYZ, part XYZ and geometric properties.
The approach of giving all the input and getting all of the optimum printing parameters out at the same time is called multi-output, multi-target, multi-variate, or multi-response regression when continuous values are predicted (there are different names and techniques when the problem is a classification problem). This approach is more complicated than fitting a separate model to each output, because there are expected relationships between the outputs. The machine learning techniques best used for this approach are support vector regression, regression trees, and neural networks. A good review by Borchani et al. (2015) outlines this problem and recent attempts to improve the techniques.
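The multi-output setup can be sketched with scikit-learn’s MLPRegressor, which accepts a two-dimensional target array directly. The data below is random stand-in data; the feature and target counts are illustrative only, not our actual training set:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stand-ins: 9 geometric features per point (XYZ, normals, curvatures, angles)
X = rng.random((200, 9))
# 5 targets per point: nozzle temp, bed temp, print speed, extrusion %, fan %
y = rng.random((200, 5))

# MLPRegressor natively handles a 2-D y (multi-output regression)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
model.fit(X, y)

predictions = model.predict(X[:3])  # one full parameter set per input point
```

Because the network shares hidden units across all five outputs, the expected relationships between the printing parameters can be learned jointly rather than fitting five independent models.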
Scikit-learn has a few models that natively support multi-output regression. As an example of the model output, each parameter combination receives a predicted likelihood of printing without error:
| Nozzle.Temp | Bed.Temp | Print.Speed | Extrusion.Multiplier | Fan.Speed.Percent | Predicted.Likelihood.None.Error |
|---|---|---|---|---|---|
| 245 | 120 | 20 | 110 | 80 | 0.98 |
| 245 | 110 | 40 | 110 | 80 | 0.90 |
| 235 | 120 | 60 | 110 | 45 | 0.65 |
| 225 | 110 | 60 | 100 | 45 | 0.54 |
| 245 | 100 | 60 | 110 | 10 | 0.12 |
| 245 | 110 | 40 | 100 | 80 | 0.00 |
Here we can see the best predicted printing parameters are the first combination, but the second combination might also print well. Being able to pick between parameter combinations that fall above some threshold prediction might enable better smoothing logic when it’s time to print using the optimized local parameters.
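Selecting all combinations above some threshold can be sketched with pandas, using the predicted values from the table above (the 0.8 threshold is our own illustrative choice):

```python
import pandas as pd

# Prediction table from above
predictions = pd.DataFrame({
    "Nozzle.Temp":          [245, 245, 235, 225, 245, 245],
    "Bed.Temp":             [120, 110, 120, 110, 100, 110],
    "Print.Speed":          [20, 40, 60, 60, 60, 40],
    "Extrusion.Multiplier": [110, 110, 110, 100, 110, 100],
    "Fan.Speed.Percent":    [80, 80, 45, 45, 10, 80],
    "Predicted.Likelihood.None.Error": [0.98, 0.90, 0.65, 0.54, 0.12, 0.00],
})

# Keep every combination predicted to print error-free at least 80% of the time
candidates = predictions[predictions["Predicted.Likelihood.None.Error"] >= 0.8]
```

The smoothing logic would then be free to choose among the rows of `candidates` rather than being forced to use the single top combination.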
A timing test using a pre-trained sklearn MLPRegressor model to predict over 2 million values completed the task in ~3 seconds.
Once we have the predicted optimum printing parameters, we will need to communicate those parameters to the 3D printer. This will be done through the gcode, but first we need to associate each segment of gcode with the correct printing parameters. Here you can see the STL points (black) overlaid on the toolpath (light blue). Note: The toolpath is not 50 mm x 50 mm x 50 mm; it actually measures 49.5 mm x 49.5 mm x 49.4 mm. Thus, the STL points were scaled to match the dimensions of the toolpath.
We wrote a python script to chop the toolpath up into 2 mm segments and associate those segments with the nearest vertex. Here we have colored the “gcode” by its associated vertex. This indicates that our mapping algorithm works as expected. Note: To produce this image, the gcode was cut up into 0.1 mm segments and mapped to the nearest vertex.
The segments of gcode nearby that share the same color would be printed with the same printing parameters.
Insert code for associating the segments with a vertex
Next, these printing parameters for each segment need to be communicated to the printer via gcode. Here is a script that takes in the original gcode, cuts up the lines, and rewrites the segmented gcode. All of the parameters of interest, besides nozzle temperature, can be incorporated instantaneously into the print without compromising the printing process. The nozzle temperature, however, cannot be changed immediately, so heating/cooling must be done in the background while printing continues.
Once the data has been collected, the model has been trained and we have optimum local printing parameters, these two python scripts for splicing and matching the gcode with a vertex and rewriting the spliced gcode using the predicted parameters will be combined into one script.